Learning Sensor Multiplexing Design through Back-propagation
Recent progress on many imaging and vision tasks has been driven by the use of deep feed-forward neural networks, which are trained by propagating gradients of a loss defined on the final output, back through the network up to the first layer that operates directly on the image. We propose back-propagating one step further---to learn camera sensor designs jointly with networks that carry out inference on the images they capture. In this paper, we specifically consider the design and inference problems in a typical color camera---where the sensor is able to measure only one color channel at each pixel location, and computational inference is required to reconstruct a full color image. We learn the camera sensor's color multiplexing pattern by encoding it as a layer whose learnable weights determine which color channel, from among a fixed set, will be measured at each location. These weights are jointly trained with those of a reconstruction network that operates on the corresponding sensor measurements to produce a full color image. Our network achieves significant improvements in accuracy over the traditional Bayer pattern used in most color cameras. It automatically learns to employ a sparse color measurement approach similar to that of a recent design, and moreover, improves upon that design by learning an optimal layout for these measurements.
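The multiplexing layer described above can be sketched as a differentiable channel-selection operation: each pixel holds learnable logits over the fixed set of color channels, and a softmax relaxation keeps the per-pixel selection trainable by back-propagation. This is an illustrative sketch, not the paper's exact formulation; the function and variable names are assumptions.

```python
import numpy as np

def sensor_layer(image, logits, temperature=1.0):
    """Differentiable color multiplexing (illustrative sketch).

    At each pixel, a softmax over learnable per-channel logits forms a
    soft selection of one color channel, producing a single scalar
    measurement per pixel. As the logits sharpen (or the temperature is
    lowered), the soft selection approaches a hard one-channel-per-pixel
    pattern like a camera's color filter array.

    image:  (H, W, C) array of ground-truth color values.
    logits: (H, W, C) learnable selection weights.
    Returns a (H, W) array of simulated sensor measurements.
    """
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)   # stabilize the exponentials
    w = np.exp(z)
    w /= w.sum(axis=-1, keepdims=True)      # softmax over channels
    return (w * image).sum(axis=-1)         # one measurement per pixel

# Toy usage: a 2x2 RGB patch with logits biased toward a Bayer-like
# G/R/B/G layout; large logits make the soft selection nearly hard.
rng = np.random.default_rng(0)
img = rng.random((2, 2, 3))
logits = np.zeros((2, 2, 3))
logits[0, 0, 1] = 10.0  # green
logits[0, 1, 0] = 10.0  # red
logits[1, 0, 2] = 10.0  # blue
logits[1, 1, 1] = 10.0  # green
meas = sensor_layer(img, logits)
```

In a full training setup, `logits` would be optimized jointly with the weights of the reconstruction network, with gradients of the reconstruction loss flowing through the softmax into the sensor pattern.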
Reviews: Learning Sensor Multiplexing Design through Back-propagation
The paper is nicely written, the methodology is easy to follow, and the validation shows the benefit of the proposed approach. Comments below: One main question that arises when learning a color filter pattern automatically is its dependence on the training dataset. Would a different heterogeneous dataset produce a completely different sensor pattern? If the pattern changes significantly between training sets, does that call the robustness of any chosen pattern into question? On the same note, how could this impact camera designs? I.e., could a universal pattern exist, or would there be multiple patterns, each designed for particular situations? For example, could patterns designed for outdoor, indoor, urban, or nature settings produce much better image quality than a universal pattern?